Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
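Since the models are openly released, a minimal usage sketch is possible. The following assumes the Hugging Face transformers library and uses a smaller released variant (bigscience/bloom-560m) for illustration, since the full 176B checkpoint requires multi-GPU or offloaded inference:

```python
# Minimal sketch: generating text with a released BLOOM checkpoint via the
# Hugging Face transformers library.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # swap in "bigscience/bloom" given enough hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A language model is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```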
In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning. Specifically, we exploit the easily available out-of-distribution samples to drive the classifier to avoid irrelevant features by maximizing the distance from prototypes to out-of-distribution samples while minimizing the distance to in-distribution samples (i.e., support and query data). Our approach is simple to implement, agnostic to the feature extractor, lightweight without any additional pre-training cost, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pretrained networks with different architectures.
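A minimal sketch of the described objective, assuming a prototypical-network setup; the exact weighting of the push-away term and the helper below are assumptions for illustration:

```python
# Sketch of the described objective (exact form assumed): pull in-distribution
# support/query embeddings toward their class prototypes while pushing the
# prototypes away from out-of-distribution (OOD) embeddings.
import torch
import torch.nn.functional as F

def ood_prototype_loss(support, support_labels, query, query_labels, ood, push_weight=0.1):
    # support: [Ns, d], query: [Nq, d], ood: [No, d] embeddings from any extractor;
    # labels are class indices 0..C-1.
    classes = support_labels.unique()
    prototypes = torch.stack([support[support_labels == c].mean(0) for c in classes])  # [C, d]

    # Minimize query-to-prototype distance (standard prototypical classification).
    logits = -torch.cdist(query, prototypes)        # negative Euclidean distances
    ce = F.cross_entropy(logits, query_labels)

    # Maximize prototype-to-OOD distance (negated so the total is minimized).
    push = -torch.cdist(prototypes, ood).mean()

    return ce + push_weight * push
```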
Convolutional Neural Networks (CNNs) demonstrate excellent performance in various applications but have high computational complexity. Quantization is applied to reduce the latency and storage cost of CNNs. Among quantization methods, Binary and Ternary Weight Networks (BWNs and TWNs) have a unique advantage over 8-bit and 4-bit quantization: they replace the multiplication operations in CNNs with additions, which are favored on In-Memory-Computing (IMC) devices. IMC acceleration for BWNs has been widely studied, but although TWNs have higher accuracy and better sparsity than BWNs, IMC acceleration for TWNs has received limited research attention. TWNs on existing IMC devices are inefficient because the sparsity is not well utilized and the addition operation is not efficient. In this paper, we propose FAT, a novel IMC accelerator for TWNs. First, we propose a Sparse Addition Control Unit, which utilizes the sparsity of TWNs to skip the null operations on zero weights. Second, we propose a fast addition scheme based on the memory sense amplifier to avoid the time overhead of both carry propagation and writing the carry back to memory cells. Third, we further propose a combined data mapping to reduce the data movement of both activations and weights and to increase parallelism across memory columns. Simulation results show that, for addition operations at the sense-amplifier level, FAT achieves 2.00X speedup, 1.22X power efficiency, and 1.22X area efficiency compared with the state-of-the-art IMC accelerator ParaPIM. On networks with 80% average sparsity, FAT achieves 10.02X speedup and 12.19X energy efficiency over ParaPIM.
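To make the core idea concrete, here is a purely functional sketch (not a hardware model) of why ternary weights suit IMC: a dot product reduces to additions and subtractions, and zero weights can be skipped entirely, which is the behavior the Sparse Addition Control Unit exploits:

```python
# Functional sketch of a ternary-weight dot product: with weights in
# {-1, 0, +1}, multiplication disappears and zero weights are skipped.
import numpy as np

def ternary_dot(activations: np.ndarray, weights: np.ndarray) -> float:
    acc = 0.0
    for a, w in zip(activations, weights):
        if w == 0:
            continue               # zero weight: skipped, no addition performed
        acc += a if w > 0 else -a  # +1 -> add, -1 -> subtract; no multiplier needed
    return acc

x = np.random.randn(8)
w = np.array([1, 0, -1, 0, 0, 1, -1, 0])  # ternary weights with 5/8 sparsity
assert np.isclose(ternary_dot(x, w), float(x @ w))
```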
Despite recent advances in clinical natural language processing (NLP), we observe some resistance from the clinical and translational research community to adopting NLP models due to their limited transparency, interpretability, and usability. In this study, we propose an open natural language processing development framework and evaluate it through the implementation of NLP algorithms for the National COVID Cohort Collaborative (N3C). Motivated by the interest in information extraction from COVID-19-related clinical notes, our work includes 1) an open data annotation process using COVID-19 signs and symptoms as the use case, 2) a community-driven ruleset composing platform, and 3) a synthetic text data generation workflow for producing texts for information extraction tasks without involving human subjects. The corpora were derived from texts from three different institutions (Mayo Clinic, University of Kentucky, University of Minnesota). Gold-standard annotations were tested with a single institution's (Mayo) ruleset, yielding F-scores of 0.876, 0.706, and 0.694 on the Mayo, Minnesota, and Kentucky test datasets, respectively. As a consortium effort of the N3C NLP subgroup, this study demonstrates the feasibility of creating a federated NLP algorithm development and benchmarking platform to enhance multi-institution clinical NLP research and adoption. Although we use COVID-19 as the use case in this work, our framework is general enough to be applied to other areas of interest in clinical NLP.
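As an illustration of the kind of rule-based extraction such a ruleset platform composes, here is a minimal sketch; the symptom lexicon and negation cues are hypothetical stand-ins, not the actual N3C ruleset:

```python
# Toy rule-based extractor: lexicon matching plus a simple pre-mention
# negation window. Rules here are illustrative placeholders only.
import re

SYMPTOM_LEXICON = {"fever", "cough", "shortness of breath", "loss of taste"}
NEGATION_CUES = re.compile(r"\b(denies|no|without|negative for)\b[^.]{0,40}$", re.I)

def extract_symptoms(note: str):
    mentions = []
    for symptom in SYMPTOM_LEXICON:
        for m in re.finditer(re.escape(symptom), note, re.I):
            window = note[max(0, m.start() - 40):m.start()]  # text just before the mention
            negated = bool(NEGATION_CUES.search(window))
            mentions.append((symptom, "negated" if negated else "present"))
    return mentions

print(extract_symptoms("Patient reports fever and cough but denies shortness of breath."))
```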
Diabetic Retinopathy (DR) is a leading cause of vision loss worldwide, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System, which classifies DR grading, localizes lesion areas, and provides visual explanations; and (ii) DRG-Expert-Interaction, which receives feedback from expert users and improves the DRG-AI-System. To deal with sparse data, we utilize transfer-learning mechanisms to extract invariant feature representations using the Wasserstein distance and adversarial-learning-based entropy minimization. In addition, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion features and classification features, our approach remains robust under a certain level of noise in user feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.
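As a rough sketch of attention-based lesion selection at a single feature level (the actual DRG-Net modules and the constraint losses between lesion and grading features are more involved than this generic gate):

```python
# Generic spatial-attention gate: learns a per-location relevance map that
# softly selects lesion-relevant regions and doubles as a visual explanation.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-location relevance score

    def forward(self, feats: torch.Tensor):
        # feats: [B, C, H, W] from a low- or high-level backbone stage
        attn = torch.sigmoid(self.score(feats))  # [B, 1, H, W] in (0, 1)
        return feats * attn, attn                # gated features + explanation map

feats = torch.randn(2, 64, 32, 32)
gated, attn_map = SpatialAttention(64)(feats)
print(gated.shape, attn_map.shape)  # [2, 64, 32, 32] and [2, 1, 32, 32]
```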
We introduce an approach for the answer-aware question generation problem. Instead of relying only on the capability of strong pre-trained language models, we observe that the information of answers and questions can be found in some relevant sentences in the context. Based on that, we design a model with two modules: a selector and a generator. The selector forces the model to focus more on the sentences relevant to a given answer, providing implicit local information. The generator produces questions by implicitly combining the local information from the selector with the global information of the whole context encoded by the encoder. The model is trained jointly to take advantage of latent interactions between the two modules. Experimental results on two benchmark datasets show that our model outperforms strong pre-trained models on the question generation task. The code is also available (shorturl.at/lV567).
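A minimal sketch of the selector side of this design, under assumed shapes and a bilinear scoring function (the paper's exact parameterization may differ); the generator would fuse the returned local summary with the global context encoding:

```python
# Selector sketch: score each context sentence against the answer, then form
# an attention-weighted local summary for the generator to consume.
import torch
import torch.nn as nn

class Selector(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.bilinear = nn.Bilinear(d, d, 1)  # relevance of each sentence to the answer

    def forward(self, sent_reprs: torch.Tensor, answer_repr: torch.Tensor):
        # sent_reprs: [num_sents, d], answer_repr: [d]
        ans = answer_repr.expand_as(sent_reprs)
        scores = self.bilinear(sent_reprs, ans).squeeze(-1)  # [num_sents]
        weights = torch.softmax(scores, dim=0)
        local = (weights.unsqueeze(-1) * sent_reprs).sum(0)  # weighted local summary
        return local, weights

d = 128
local, w = Selector(d)(torch.randn(5, d), torch.randn(d))
print(local.shape, w)  # local summary [d] plus per-sentence attention weights
```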
In the era of the Internet of Things (IoT), network-wide anomaly detection is a crucial part of monitoring IoT networks due to the inherent security vulnerabilities of most IoT devices. Principal Component Analysis (PCA) has been proposed to separate network traffic into two disjoint subspaces corresponding to normal and malicious behaviors for anomaly detection. However, privacy concerns and the limited computing resources of devices compromise the practical effectiveness of PCA. We propose a federated PCA-based Grassmannian optimization framework that coordinates IoT devices to aggregate a joint profile of normal network behaviors for anomaly detection. First, we introduce a privacy-preserving federated PCA framework to simultaneously capture the traffic profiles of various IoT devices. Then, we investigate alternating-direction-method-of-multipliers gradient-based learning on the Grassmann manifold to guarantee fast training and low detection latency under limited computational resources. Empirical results on the NSL-KDD dataset demonstrate that our method outperforms baseline approaches. Finally, we show that the Grassmann manifold algorithm is highly suited to IoT anomaly detection, permitting a drastic reduction in the system's analysis time. To the best of our knowledge, this is the first federated PCA algorithm for anomaly detection that meets the requirements of IoT networks.
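The geometric core of this training is a Riemannian gradient step on the Grassmann manifold; a minimal sketch follows, with the federated/ADMM coordination omitted:

```python
# One Riemannian gradient step on the Grassmann manifold: project the
# Euclidean gradient onto the tangent space at U, step, and retract back to
# the manifold via QR decomposition.
import numpy as np

def grassmann_step(U: np.ndarray, euclid_grad: np.ndarray, lr: float) -> np.ndarray:
    # U: [d, k] with orthonormal columns, representing a k-dim subspace of R^d
    tangent = euclid_grad - U @ (U.T @ euclid_grad)  # (I - U U^T) G
    Q, R = np.linalg.qr(U - lr * tangent)            # QR retraction
    return Q * np.sign(np.diag(R))                   # fix column signs for uniqueness

d, k = 10, 3
U, _ = np.linalg.qr(np.random.randn(d, k))
U_next = grassmann_step(U, np.random.randn(d, k), lr=0.1)
assert np.allclose(U_next.T @ U_next, np.eye(k), atol=1e-8)  # stays on the manifold
```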
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
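Since the Flan-T5 checkpoints are publicly released, a minimal zero-shot usage sketch is possible; this assumes the Hugging Face transformers library and uses the small variant for illustration:

```python
# Minimal sketch: zero-shot prompting of a released Flan-T5 checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-small"  # larger released sizes: base / large / xl / xxl
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "Answer the following question. Who wrote 'Pride and Prejudice'?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```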
Powered prosthetic legs must anticipate the user's intent when switching between different locomotion modes (e.g., stair ascent/descent, ramp ascent/descent). Many data-driven classification techniques have demonstrated promising results for predicting user intent, but the performance of these intent-prediction models on novel subjects remains unsatisfactory. In other domains (e.g., image classification), transfer learning improves accuracy by reusing features previously learned from a large dataset (i.e., a pre-trained model) and transferring the learned model to a new task for which a smaller dataset is available. In this paper, we develop a deep convolutional neural network with intra-subject (subject-dependent) and inter-subject (subject-independent) validations based on a human locomotion dataset. We then apply transfer learning to the subject-independent model using a small portion (10%) of the data from the left-out subject, and we compare the performance of these three models. Our results show that the transfer-learning (TL) model outperforms the subject-independent (IND) model and approaches the subject-dependent (DEP) model (DEP error: 0.74 ± 0.002%, IND error: 11.59 ± 0.076%, TL error: 3.57 ± 0.02% with 10% of the data). Moreover, as expected, transfer-learning accuracy improves as more data from the left-out subject becomes available. We also evaluate the performance of the intent-prediction system with various sensor configurations that may be available in a prosthetic-leg application. Our results suggest that a thigh IMU on the prosthesis is sufficient to predict locomotion intent in practice.
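A minimal sketch of this transfer-learning recipe; the backbone architecture, channel counts, and six-channel IMU input are illustrative assumptions, not the paper's network:

```python
# Freeze the subject-independent feature extractor, attach a fresh head, and
# fine-tune only the head on the left-out subject's small (10%) data split.
import torch.nn as nn

# Stand-in for the subject-independent CNN (architecture assumed for illustration;
# input: 6 IMU channels over a time window).
backbone = nn.Sequential(nn.Conv1d(6, 32, 5), nn.ReLU(), nn.AdaptiveAvgPool1d(1), nn.Flatten())

for p in backbone.parameters():
    p.requires_grad = False  # keep features learned from the other subjects fixed

model = nn.Sequential(backbone, nn.Linear(32, 5))  # new head, e.g. 5 locomotion modes

trainable = [p for p in model.parameters() if p.requires_grad]
# ... fine-tune only `trainable` with a standard optimizer on the 10% subject data
```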
We introduce LAVIS, an open-source deep learning library for LAnguage-VISion research and applications. LAVIS aims to serve as a one-stop comprehensive library that brings recent advances in the language-vision field to researchers and practitioners, and that empowers future research and development. It features a unified interface for easy access to state-of-the-art image-language and video-language models and common datasets. LAVIS supports training, evaluation, and benchmarking on a rich variety of tasks, including multimodal classification, retrieval, captioning, visual question answering, dialogue, and pre-training. The library is also highly extensible and configurable, facilitating future development and customization. In this technical report, we describe the library's design principles, key components, and functionality, and present benchmarking results across common language-vision tasks. The library is available at: https://github.com/salesforce/lavis.
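A minimal usage sketch, adapted from the library's documented image-captioning example (the image path is a placeholder):

```python
# Caption an image with a BLIP model through LAVIS's unified interface.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)
raw_image = Image.open("photo.jpg").convert("RGB")  # placeholder path
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
print(model.generate({"image": image}))
```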